
    Health AI for Good Rather Than Evil? The Need for a New Regulatory Framework for AI-Based Medical Devices

    Artificial intelligence (AI), especially its subset machine learning, has tremendous potential to improve health care. However, health AI also raises new regulatory challenges. In this Article, I argue that there is a need for a new regulatory framework for AI-based medical devices in the U.S. that ensures that such devices are reasonably safe and effective when placed on the market and will remain so throughout their life cycle. I advocate for U.S. Food and Drug Administration (FDA) and congressional actions. I focus on how the FDA could, with additional statutory authority, regulate AI-based medical devices. I show that the FDA incompletely regulates health AI-based products, which may jeopardize patient safety and undermine public trust. For example, the medical device definition is too narrow, and several risky health AI-based products are not subject to FDA regulation. Moreover, I show that most AI-based medical devices available on the U.S. market are 510(k)-cleared. However, the 510(k) pathway raises significant safety and effectiveness concerns. I thus propose a future regulatory framework for premarket review of medical devices, including AI-based ones. Further, I discuss two problems related to specific AI-based medical devices, namely opaque ("black-box") algorithms and adaptive algorithms that can continuously learn, and I make suggestions on how to address them. Finally, I encourage the FDA to broaden its view and consider AI-based medical devices as systems, not just devices, and to focus more on the environment in which they are deployed.

    Transcript: Presentation on Data Privacy Questions in the Digital Health World

    The following is a transcription from The Digital Health and Technology Symposium presented at Cleveland-Marshall College of Law by The Journal of Law & Health on Friday, April 8, 2022. This transcript has been lightly edited for clarity.

    “Nutrition Facts Labels” for Artificial Intelligence/Machine Learning-Based Medical Devices—The Urgent Need for Labeling Standards

    Artificial Intelligence (“AI”), particularly its subset Machine Learning (“ML”), is quickly entering medical practice. The U.S. Food and Drug Administration (“FDA”) has already cleared or approved more than 520 AI/ML-based medical devices, and many more devices are in the research and development pipeline. AI/ML-based medical devices are not only used in clinics by health care providers but are also increasingly offered directly to consumers, for example as apps and wearables. Despite their tremendous potential for improving health care, AI/ML-based medical devices also raise many regulatory issues. This Article focuses on one issue that has not received sustained attention in the legal or policy debate: labeling for AI/ML-based medical devices. Labeling is crucial to prevent harm to patients and consumers (e.g., by reducing the risk of bias) and to ensure that users know how to properly use the device and assess its benefits, potential risks, and limitations. It can also support transparency to users and thus promote public trust in new digital health technologies. This Article is the first to identify and thoroughly analyze the unique challenges of labeling for AI/ML-based medical devices and to provide solutions to address them. It establishes that there are currently no labeling standards for AI/ML-based medical devices. This is of particular concern because some of these devices are prone to biases, are opaque (“black boxes”), and have the ability to continuously learn. This Article argues that labeling standards for AI/ML-based medical devices are urgently needed, as the current labeling requirements for medical devices and the FDA’s case-by-case approach for a few AI/ML-based medical devices are insufficient. In particular, it proposes what such standards could look like, including eleven key types of information that should be included on the label, ranging from indications for use and details on the data sets to model limitations, warnings and precautions, and privacy and security. In addition, this Article argues that “nutrition facts labels,” known from food products, are a promising label design for AI/ML-based medical devices. Such labels should also be “dynamic” (rather than static) for adaptive algorithms that can continuously learn. Although this Article focuses on AI/ML-based medical devices, it also has implications for AI/ML-based products that are not subject to FDA regulation.

    Digital Home Health During the COVID-19 Pandemic: Challenges to Safety, Liability, and Informed Consent, and the Way to Move Forward

    We argue that changing how postmarket studies are conducted and who evaluates them might mitigate some concerns over the agency’s increasing reliance upon real-world evidence (RWE). Distributing the responsibility for designing, conducting, and assessing real-world studies of medical devices and drugs beyond industry sponsors and the FDA is critical to producing, and acting upon, more clinically useful information. We explore how the DESI program provides a useful model for the governance of RWE today. We explain why the FDA’s Center for Devices and Radiological Health is the most promising site for a new DESI initiative inspired by the challenges of regulating drugs in the past.

    The need for a system view to regulate artificial intelligence/machine learning-based software as medical device

    Artificial intelligence (AI) and machine learning (ML) systems in medicine are poised to significantly improve health care, for example, by offering earlier diagnoses of diseases or recommending optimally individualized treatment plans. However, the emergence of AI/ML in medicine also creates challenges, which regulators must pay attention to. Which medical AI/ML-based products should be reviewed by regulators? What evidence should be required to permit marketing of AI/ML-based software as a medical device (SaMD)? How can we ensure the safety and effectiveness of AI/ML-based SaMD that may change over time as it is applied to new data? The U.S. Food and Drug Administration (FDA), for example, has recently published a discussion paper to address some of these issues. But it misses an important point: we argue that regulators like the FDA need to widen their scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective—from a product view to a system view—is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA that are used to regulating products, not systems. We offer several suggestions for regulators to make this challenging but important transition.

    The proposed EU Directives for AI liability leave worrying gaps likely to impact medical AI

    Two newly proposed Directives impact liability for artificial intelligence in the EU: a Product Liability Directive (PLD) and an AI Liability Directive (AILD). While these proposed Directives provide some uniform liability rules for AI-caused harm, they fail to fully accomplish the EU’s goal of providing clarity and uniformity for liability for injuries caused by AI-driven goods and services. Instead, the Directives leave potential liability gaps for injuries caused by some black-box medical AI systems, which use opaque and complex reasoning to provide medical decisions and/or recommendations. Patients may not be able to successfully sue manufacturers or healthcare providers for some injuries caused by these black-box medical AI systems under either EU Member States’ strict or fault-based liability laws. Since the proposed Directives fail to address these potential liability gaps, manufacturers and healthcare providers may have difficulty predicting liability risks associated with creating and/or using some potentially beneficial black-box medical AI systems.

    Privacy Aspects of Direct-to-Consumer Artificial Intelligence/Machine Learning health apps

    Direct-To-Consumer Artificial Intelligence/Machine Learning health apps (DTC AI/ML health apps) are increasingly being made available for download in app stores. However, such apps raise challenges, one of which is providing adequate protection of consumers' privacy. This article analyzes the privacy aspects of DTC AI/ML health apps and suggests how consumers' privacy could be better protected in the United States. In particular, it discusses the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the Federal Trade Commission (FTC) Act, the FTC's Health Breach Notification Rule, the California Consumer Privacy Act of 2018, the California Privacy Rights Act of 2020, the Virginia Consumer Data Protection Act, the Colorado Privacy Act, and the EU General Data Protection Regulation (2016/679, GDPR). This article concludes that much more work is needed to adequately protect the privacy of consumers using DTC AI/ML health apps. For example, while the FTC's recent actions to protect consumers using DTC AI/ML health apps are laudable, consumer literacy must be promoted much more actively. Even if HIPAA is not updated, a U.S. federal privacy law that offers a high level of data protection, similar to the EU GDPR, could close many of HIPAA's loopholes and ensure that American consumers' data collected via DTC AI/ML health apps are better protected.

    German Pharmaceutical Pricing: Lessons for the United States

    To control pharmaceutical spending and improve access, the United States could adopt strategies similar to those introduced in Germany by the 2011 German Pharmaceutical Market Reorganization Act. In Germany, manufacturers sell new drugs immediately upon receiving marketing approval. During the first year, the German Federal Joint Committee assesses new drugs and assigns them a score indicating their added medical benefit. New drugs comparable to drugs in a reference price group are assigned to that group and receive the same reimbursement, unless they are therapeutically superior. The National Association of Statutory Health Insurance Funds then negotiates with manufacturers the maximum reimbursement that applies starting in the 13th month, consistent with the drug's added benefit assessment and price caps in other European countries. In the absence of agreement, an arbitration board sets the price. Manufacturers either accept the price resolution or exit the market. Thereafter, prices generally are not increased, even for inflation. US public and private insurers control prices in diverse ways, but typically obtain discounts by designating certain drugs as preferred and by restricting patient access to, or charging high copayments for, nonpreferred drugs. This article draws 10 lessons for drug pricing reform in US federal programs and private insurance.

    Germany’s Digital Health Reforms in the COVID-19 Era: Lessons and Opportunities for Other Countries

    Reimbursement is a key challenge for many new digital health solutions, whose importance and value have been highlighted and expanded by the current COVID-19 pandemic. Germany’s new Digital Healthcare Act (Digitale–Versorgung–Gesetz or DVG) entitles all individuals covered by statutory health insurance to reimbursement for certain digital health applications (i.e., insurers will pay for their use). Since Germany, like the United States (US), has a multi-payer health care system, the new Act provides a particularly interesting case study for US policymakers. We first provide an overview of the new German DVG and outline the landscape for reimbursement of digital health solutions in the US, including recent changes to policies governing telehealth during the COVID-19 pandemic. We then discuss challenges and unanswered questions raised by the DVG, ranging from the limited scope of the Act to privacy issues. Lastly, we highlight early lessons and opportunities for other countries.

    Liability for Use of Artificial Intelligence in Medicine

    While artificial intelligence has substantial potential to improve medical practice, errors will certainly occur, sometimes resulting in injury. Who will be liable? Questions of liability for AI-related injury raise not only immediate concerns for potentially liable parties, but also broader systemic questions about how AI will be developed and adopted. The landscape of liability is complex, involving health-care providers and institutions and the developers of AI systems. In this chapter, we consider these three principal loci of liability: individual health-care providers, focused on physicians; institutions, focused on hospitals; and developers.